In recent years the applications of machine learning models have increased rapidly, due to the large amount of available data and technological progress. While some domains like web analysis can benefit from this with only minor restrictions, other fields like medicine with patient data are more strongly regulated. In particular, \emph{data privacy} plays an important role, as recently highlighted by the trustworthy AI initiative of the EU and general privacy regulations in legislation. Another major challenge is that the required training \emph{data is} often \emph{distributed} in terms of features or samples and unavailable for classical batch learning approaches. In 2016, Google proposed a framework called \emph{Federated Learning} to address both of these problems. We provide a brief overview of existing methods and applications in the fields of vertical and horizontal \emph{Federated Learning}, as well as \emph{Federated Transfer Learning}.
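The horizontal setting mentioned above (clients share the same features but hold different samples) is commonly trained by federated averaging: each client takes local gradient steps on its private data and only the model updates are sent to a server, which averages them. The following is a minimal sketch of that idea; the linear model, synthetic client data, and hyperparameters are illustrative assumptions, not details from the surveyed work.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    # One gradient step of linear least squares on a client's private data.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(3):                      # each client holds its own samples
    X = rng.standard_normal((20, 2))
    clients.append((X, X @ w_true + 0.01 * rng.standard_normal(20)))

w = np.zeros(2)
for _ in range(200):                    # communication rounds
    # Clients train locally; only model parameters leave the device,
    # never the raw data.
    updates = [local_step(w, X, y) for X, y in clients]
    w = np.mean(updates, axis=0)        # server averages the client models
print(w)
```

Note that averaging parameters rather than pooling data is what gives the privacy benefit; in practice this is combined with secure aggregation or differential privacy for stronger guarantees.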
Matrix approximation is a key component of large-scale kernel-based machine learning methods. The recently proposed method MEKA (Si et al., 2014) effectively exploits two common assumptions in Hilbert spaces: the low-rank property of the inner product matrix obtained from shift-invariant kernel functions, and a block cluster structure following from a data compactness assumption. In this work we extend MEKA to apply not only to shift-invariant kernels but also to non-stationary kernels such as polynomial kernels and the extreme learning kernel. We also detail how to handle non-positive semi-definite kernel functions within MEKA, caused either by the approximation itself or by the deliberate use of general kernel functions. We present a Lanczos-based estimation of a spectrum shift to obtain a stable positive semi-definite MEKA approximation, which can also be used in classical convex optimization frameworks. Furthermore, we support our findings with theoretical considerations and experiments on various synthetic and real-world data.
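The spectrum-shift idea can be illustrated as follows: estimate the smallest eigenvalue of an indefinite kernel matrix with a Lanczos-type solver and add a diagonal shift so the result becomes positive semi-definite. This is only a sketch of the general technique under assumed example data (a sigmoid kernel, which is not PSD in general), not the specific MEKA procedure from the abstract.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def psd_shift(K, eps=1e-8):
    # Estimate the smallest eigenvalue with a Lanczos-based solver
    # (scipy's eigsh with which='SA' = smallest algebraic).
    lam_min = eigsh(K, k=1, which='SA', return_eigenvectors=False)[0]
    if lam_min < 0:
        # Shift the spectrum up so all eigenvalues become >= eps.
        K = K + (-lam_min + eps) * np.eye(K.shape[0])
    return K

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
K = np.tanh(X @ X.T - 1.0)   # sigmoid kernel: symmetric but indefinite
K_psd = psd_shift(K)
print(np.linalg.eigvalsh(K_psd).min())
```

The shift changes all eigenvalues by the same constant, so the eigenvectors (and hence the block cluster structure the method relies on) are preserved, while the corrected matrix is usable in convex solvers that require a PSD kernel.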